12 research outputs found

    Large-scale mobile audio environments for collaborative musical interaction.

    New application spaces and artistic forms can emerge when users are freed from constraints. In the general case of human-computer interfaces, users are often confined to a fixed location, which severely limits mobility. To overcome this constraint in the context of musical interaction, we present a system for managing large-scale collaborative mobile audio environments driven by user movement. Multiple participants navigate through physical space while sharing overlaid virtual elements. Each user is equipped with a mobile computing device, GPS receiver, orientation sensor, microphone, and headphones, or various combinations of these technologies. We investigate methods of location tracking, wireless audio streaming, and state management between mobile devices and centralized servers. The result is a system that allows mobile users, each with subjective 3-D audio rendering, to share virtual scenes. The audio elements of these scenes can be organized into large-scale spatial audio interfaces, allowing for immersive mobile performance, locative audio installations, and many new forms of collaborative sonic activity.
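    As a rough illustration of the subjective rendering idea described in this abstract, the sketch below shows how a listener's GPS position and compass heading could drive per-user gain and panning for a virtual sound source in a shared scene. The names, the flat-earth coordinate conversion, and the simple pan/attenuation model are assumptions for illustration only, not the actual implementation of the system described.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch: per-user ("subjective") rendering of a shared virtual
# sound source, driven by GPS position and compass heading. All names and the
# simple pan/attenuation model are illustrative assumptions.

METERS_PER_DEG = 111_320.0  # rough metres per degree of latitude

@dataclass
class Listener:
    lat: float      # degrees, from GPS
    lon: float      # degrees, from GPS
    heading: float  # radians, clockwise from north, from orientation sensor

@dataclass
class SoundSource:
    lat: float
    lon: float

def local_offset_m(listener: Listener, src: SoundSource) -> tuple[float, float]:
    """Flat-earth offset (east, north) in metres from listener to source."""
    east = (src.lon - listener.lon) * METERS_PER_DEG * math.cos(math.radians(listener.lat))
    north = (src.lat - listener.lat) * METERS_PER_DEG
    return east, north

def render_params(listener: Listener, src: SoundSource) -> tuple[float, float, float]:
    """Return (gain_left, gain_right, distance_m) for one listener and one source."""
    east, north = local_offset_m(listener, src)
    distance = math.hypot(east, north)
    # Bearing to the source, rotated into the listener's frame of reference.
    bearing = math.atan2(east, north)
    relative = bearing - listener.heading
    # Constant-power stereo pan plus 1/r attenuation (illustrative only).
    pan = math.sin(relative)            # -1 = hard left, +1 = hard right
    gain = 1.0 / max(distance, 1.0)
    gain_left = gain * math.sqrt((1.0 - pan) / 2.0)
    gain_right = gain * math.sqrt((1.0 + pan) / 2.0)
    return gain_left, gain_right, distance

if __name__ == "__main__":
    user = Listener(lat=45.5048, lon=-73.5772, heading=math.radians(90))  # facing east
    source = SoundSource(lat=45.5048, lon=-73.5762)                       # roughly 78 m east
    print(render_params(user, source))
```

    In a multi-user setting, a centralized server could hold the shared scene state while each mobile device evaluates a function like render_params against its own position and orientation, which is what makes the rendering subjective to each participant.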

    Convolution Brother's Instrument design

    No full text
    The subject of instrument design is quite broad. Much work has […]

    Survey and thematic analysis approach as input to the design of mobile music GUIs

    No full text
    Mobile devices represent a growing research field within NIME and a growing area for commercial music software. They present unique design challenges and opportunities which are yet to be fully explored and exploited. In this paper, we propose using a survey method combined with qualitative analysis to investigate the ways in which people use mobile devices musically. We subsequently present, as an area of future research, our own PDplayer, which provides a completely self-contained end application on the mobile device, potentially making the mobile a more viable and expressive tool for musicians.
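    As a rough illustration of the survey-plus-thematic-analysis workflow this abstract describes, the sketch below aggregates survey responses that have already been hand-coded with theme labels. The data, theme names, and coding scheme are invented for illustration and are not taken from the paper.

```python
from collections import Counter

# Hypothetical sketch of the aggregation step in a thematic analysis:
# free-text survey answers have already been coded with theme labels,
# and we tally which usage themes dominate. All data is invented.
coded_responses = [
    {"respondent": 1, "themes": ["sketching ideas", "touch control"]},
    {"respondent": 2, "themes": ["live performance", "touch control"]},
    {"respondent": 3, "themes": ["sketching ideas"]},
    {"respondent": 4, "themes": ["live performance", "sound toys"]},
]

theme_counts = Counter(
    theme for response in coded_responses for theme in response["themes"]
)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```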

    User-specific audio rendering and steerable sound for distributed virtual environments

    No full text
    We present a method for user-specific audio rendering of a virtual environment that is shared by multiple participants. The technique differs from methods such as amplitude differencing, HRTF filtering, and wave field synthesis. Instead we model virtual microphones within the 3-D scene, each of which captures audio to be rendered to a loudspeaker. Spatialization of sound sources is accomplished via acoustic physical modelling, yet our approach also allows for localized signal processing within the scene. In order to control the flow of sound within the scene, the user has the ability to steer audio in specific directions. This paradigm leads to many novel applications where groups of individuals can share one continuous interactive sonic space. [Keywords: multi-user, spatialization, 3-D arrangement of DSP, steerable audio]
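    As a minimal illustration of the virtual-microphone idea in this abstract, the sketch below computes the gain from one steered source to each virtual microphone, where each microphone feeds its own loudspeaker. The cardioid-style radiation pattern, the 1/r attenuation, and all names are assumptions for illustration, not the paper's acoustic model.

```python
import math

# Illustrative sketch (not the paper's model): each loudspeaker is fed by a
# virtual microphone placed in the 3-D scene. A source is "steered" by giving
# it a cardioid-like radiation pattern aimed along a chosen direction, so its
# energy flows toward some microphones and away from others.

def steered_gain(source_pos, steer_dir, mic_pos, ref_dist=1.0):
    """Gain from one steered source to one virtual microphone.

    source_pos, mic_pos: (x, y, z) in metres; steer_dir: unit vector the
    source is aimed along. Combines 1/r attenuation with a cardioid pattern.
    """
    dx = [m - s for m, s in zip(mic_pos, source_pos)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    to_mic = [d / dist for d in dx]
    # Cosine of the angle between the steering direction and the mic direction.
    cos_theta = sum(a * b for a, b in zip(steer_dir, to_mic))
    cardioid = 0.5 * (1.0 + cos_theta)          # 1 on-axis, 0 directly behind
    attenuation = ref_dist / max(dist, ref_dist)
    return cardioid * attenuation

# One source aimed along +x, captured by two virtual microphones that each
# feed their own loudspeaker: the front speaker receives it, the rear does not.
source = (0.0, 0.0, 0.0)
steer = (1.0, 0.0, 0.0)
mics = {"speaker_front": (4.0, 0.0, 0.0), "speaker_rear": (-4.0, 0.0, 0.0)}
for name, pos in mics.items():
    print(name, round(steered_gain(source, steer, pos), 3))
```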